Scour
💾 CPU Caching
L1/L2/L3, Cache Lines, False Sharing, Alignment
Scoured 28874 posts in 56.7 ms
How to Compress KV Cache in RL Post-Training? Shadow Mask Distillation for Memory-Efficient Alignment
🗜️ Vector Compression · arxiv.org · 1d
eOS-DeepContext
🏗️ LLM Infrastructure · eoscontinuum.com · 4d
Verbalised evaluation awareness in language models has little effect on their behaviour
🏆 LLM Benchmarking · lesswrong.com · 9h
mission alignment
💾 Persistence Strategies · manton.org · 1d
Model Spec Midtraining: Improving How Alignment Training Generalizes
🏗️ LLM Infrastructure · alignment.anthropic.com · 5d · Hacker News, r/artificial
Context Modification as a Negative Alignment Tax
🧠 LLM Inference · lesswrong.com · 2d
Metis: Learning to Jailbreak LLMs via Self-Evolving Metacognitive Policy Optimization
🕳 LLM Vulnerabilities · arxiv.org · 12h
Self-ReSET: Learning to Self-Recover from Unsafe Reasoning Trajectories
🛡️ AI Safety · arxiv.org · 12h
Userland Alignment
🛡️ AI Safety · lesswrong.com · 4d
A Single Neuron Is Sufficient to Bypass Safety Alignment in Large Language Models
🛡️ AI Safety · arxiv.org · 12h
Monday AI Radar #24
🆕 New AI · lesswrong.com · 6d
Confidence-Aware Alignment Makes Reasoning LLMs More Reliable
🧠 LLM Inference · arxiv.org · 1d
Personalized Alignment Revisited: The Necessity and Sufficiency of User Diversity
👤 Search Personalization · arxiv.org · 12h
Topology-Enhanced Alignment for Large Language Models: Trajectory Topology Loss and Topological Preference Optimization
📉 Embeddings Optimization · arxiv.org · 1d
LLM-Agnostic Semantic Representation Attack
🕳 LLM Vulnerabilities · arxiv.org · 12h
Phoenix-VL 1.5 Medium Technical Report
🦙 Ollama · arxiv.org · 12h
Flow-OPD: On-Policy Distillation for Flow Matching Models
📊 Model Serving Economics · arxiv.org · 1d
Why Do DiT Editors Drift? Plug-and-Play Low Frequency Alignment in VAE Latent Space
✨ Gemini · arxiv.org · 12h
Safety Anchor: Defending Harmful Fine-tuning via Geometric Bottlenecks
🛡️ AI Safety · arxiv.org · 4d
Response Time Enhances Alignment with Heterogeneous Preferences
👤 Search Personalization · arxiv.org · 1d